26 research outputs found

    Towards supporting multiple semantics of named graphs using N3 rules

    Semantic Web applications often require partitioning triples into subgraphs and associating these subgraphs with useful metadata (e.g., provenance). This led to the introduction of RDF datasets, with each RDF dataset comprising a default graph and zero or more named graphs. However, due to differences between RDF implementations, no consensus could be reached on a standard semantics, and a range of different dataset semantics are currently assumed. For an RDF system not to be limited to only a subset of online RDF datasets, the system would need to be extended to support different dataset semantics, which is exactly the problem that eluded consensus before. In this paper, we transpose this problem to Notation3 Logic, an RDF-based rule language that similarly allows citing graphs within RDF documents. We propose a solution where an N3 author can directly indicate the intended semantics of a cited graph, possibly combining multiple semantics within a single document. We supply an initial set of companion N3 rules that implement a number of RDF dataset semantics and allow an N3-compliant system to easily support multiple different semantics.
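
    A minimal sketch of the data model the abstract refers to, assuming the Python library rdflib; the example.org IRIs and the ex:assumedSemantics predicate are illustrative inventions, not the paper's actual N3 mechanism. It shows an RDF dataset holding triples in a named graph while the default graph carries metadata about that graph.

        from rdflib import Dataset, Literal, Namespace, URIRef

        EX = Namespace("http://example.org/")
        PROV = Namespace("http://www.w3.org/ns/prov#")
        G1 = URIRef("http://example.org/graphs/g1")

        ds = Dataset()

        # Triples live in a named graph ...
        g1 = ds.graph(G1)
        g1.add((EX.alice, EX.knows, EX.bob))

        # ... while the default graph carries metadata about that graph, including
        # a purely illustrative pointer to the semantics the author intends.
        ds.add((G1, PROV.wasAttributedTo, EX.curator))
        ds.add((G1, EX.assumedSemantics, Literal("quotation")))

        print(ds.serialize(format="trig"))

    How such a declared semantics should then be interpreted is exactly what the paper's companion N3 rules are meant to implement.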

    Notation3 as an Existential Rule Language

    Notation3 Logic (N3) is an extension of RDF that allows the user to write rules introducing new blank nodes to RDF graphs. Many applications (e.g., ontology mapping) rely on this feature, as blank nodes, used directly or in auxiliary constructs, are omnipresent on the Web. However, the number of fast N3 reasoners covering this very important feature of the logic is rather limited. On the other hand, there are engines like VLog or Nemo which do not directly support Semantic Web rule formats but which are developed and optimized for very similar constructs: existential rules. In this paper, we investigate the relation between N3 rules with blank nodes in their heads and existential rules. We identify a subset of N3 which can be mapped directly to existential rules and define such a mapping preserving the equivalence of N3 formulae. In order to also illustrate that in some cases N3 reasoning could benefit from our translation, we then employ this mapping in an implementation to compare the performance of the N3 reasoners EYE and cwm to VLog and Nemo on N3 rules and their mapped counterparts. Our tests show that the existential rule reasoners perform particularly well for use cases containing many facts, while especially the EYE reasoner is very fast when dealing with a high number of dependent rules. We thus provide a tool enabling the Semantic Web community to directly use existing and future existential rule reasoners and benefit from the findings of this active community.
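
    To make the correspondence concrete, consider a toy rule (my own example, not one taken from the paper): the N3 rule { ?x a :Person } => { ?x :hasParent _:y. _:y a :Person }. introduces the blank node _:y in its head. Read with the usual interpretation of blank nodes in rule conclusions, it corresponds to the existential rule

        \forall x\, \bigl( \mathit{Person}(x) \rightarrow \exists y\, \bigl( \mathit{hasParent}(x, y) \wedge \mathit{Person}(y) \bigr) \bigr)

    where the existentially quantified variable y plays the role of the blank node, which is the shape of rule that engines such as VLog and Nemo are optimized for.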

    Implicit quantification made explicit : how to interpret blank nodes and universal variables in Notation3 Logic

    Since the invention of Notation3 Logic, several years have passed in which the theory has been refined and applied in different reasoning engines like Cwm, EYE, and FuXi. Despite these developments, a clear formal definition of Notation3’s semantics is still missing. This not only forms an obstacle for the formal investigation of that logic and its relations to other formalisms, it also has practical consequences: in many cases the interpretations of the same formula differ between reasoning engines. In this paper we tackle one of the main sources of that problem, namely the uncertainty about implicit quantification. This refers to Notation3’s ability to use bound variables for which the universal or existential quantifiers are not explicitly stated, but implicitly assumed. We provide a tool for clarification through the definition of a core logic for Notation3 that only supports explicit quantification. We specify an attribute grammar which maps Notation3 formulas to that logic according to the different interpretations and thereby define the semantics of Notation3. This grammar is then implemented and used to test the impact of the differences between interpretations on practical cases. Our dataset includes Notation3 implementations from former research projects and test cases developed for the reasoner EYE. We find that 31% of these files are understood differently by different reasoners. We further analyse these cases and categorize them into different classes, of which we consider one the most harmful: if a file is manually written by a user and no specific built-in predicates are used (13% of our critical files), it is unlikely that this user is aware of possible differences. We therefore argue the need to come to an agreement on implicit quantification, and discuss the different possibilities.
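
    A toy illustration of what "implicit quantification" means at the surface level, assuming the usual N3 convention that ?x-style variables are read as universally quantified and _:x-style blank node labels as existentially quantified; this is my simplification, not the paper's attribute grammar, and it deliberately ignores the scoping question (where the quantifiers are placed) that the paper actually resolves.

        import re

        def implicit_quantifiers(n3_formula: str) -> dict:
            # Collect the variables whose quantifiers are left implicit in N3.
            universals = sorted(set(re.findall(r"\?[A-Za-z_]\w*", n3_formula)))
            existentials = sorted(set(re.findall(r"_:[A-Za-z]\w*", n3_formula)))
            return {"universal": universals, "existential": existentials}

        rule = "{ ?x :knows _:someone } => { ?x :hasContact _:someone }."
        print(implicit_quantifiers(rule))
        # {'universal': ['?x'], 'existential': ['_:someone']}

    The disagreements the paper measures arise precisely because reasoners place these implicit quantifiers at different scopes, for example whether _:someone in the premise and the conclusion denotes the same quantified variable.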

    RDF graph validation using rule-based reasoning

    The correct functioning of Semantic Web applications requires that given RDF graphs adhere to an expected shape. This shape depends on the RDF graph and on the entailments of that graph supported by the application. During validation, RDF graphs are assessed against sets of constraints, and found violations help refine the RDF graphs. However, existing validation approaches cannot always explain the root causes of violations (inhibiting refinement), and cannot fully match the entailments supported during validation with those supported by the application. As a result, these approaches either cannot accurately validate RDF graphs or need to combine multiple systems, deteriorating the validator's performance. In this paper, we present an alternative validation approach using rule-based reasoning, capable of fully customizing the used inferencing steps. We compare it to existing approaches, and present a formal grounding and practical implementation, "Validatrr", based on N3Logic and the EYE reasoner. Our approach, which supports an equivalent number of constraint types compared to the state of the art, better explains the root cause of violations thanks to the reasoner's generated logical proof, and returns an accurate number of violations thanks to the customizable inferencing rule set. Performance evaluation shows that Validatrr is performant for smaller datasets and scales linearly with respect to the RDF graph size. The detailed root cause explanations can guide future validation report description specifications, and the fine-grained level of configuration can be employed to support different constraint languages. This foundation allows further research into handling recursion, validating RDF graphs based on their generation description, and providing automatic refinement suggestions.
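
    For orientation, a minimal sketch of a single constraint check using rdflib and SPARQL; Validatrr itself expresses constraints as N3 rules evaluated by the EYE reasoner and returns a proof, so this only mimics the basic idea of reporting violating nodes. The FOAF constraint and the sample data are my own.

        from rdflib import Graph

        g = Graph()
        g.parse(data="""
        @prefix foaf: <http://xmlns.com/foaf/0.1/> .
        @prefix ex:   <http://example.org/> .
        ex:alice a foaf:Person ; foaf:name "Alice" .
        ex:bob   a foaf:Person .
        """, format="turtle")

        # Constraint: every foaf:Person must have a foaf:name.
        violations = g.query("""
        PREFIX foaf: <http://xmlns.com/foaf/0.1/>
        SELECT ?p WHERE {
          ?p a foaf:Person .
          FILTER NOT EXISTS { ?p foaf:name ?n }
        }
        """)
        for row in violations:
            print("missing foaf:name:", row.p)

    What the rule-based approach adds on top of such a check is a customizable inferencing step before validation and a logical proof explaining why each violation was raised.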

    DIVIDE : adaptive context-aware query derivation for IoT data streams

    In the Internet of Things, it is a challenging task to integrate and analyze high-velocity sensor data with domain knowledge and context information in real time. Semantic IoT platforms typically consist of stream processing components that use Semantic Web technologies to run a set of fixed queries processing the IoT data streams. Configuring these queries is still a manual task. To deal with changes in context information, which happen regularly in IoT domains, queries typically require reasoning on all sensor data in real time to derive relevant sensors and events. This can be an issue in real time, as expressive reasoning is required to deal with the complexity of many IoT domains. To solve these issues, this paper presents DIVIDE. DIVIDE automatically derives queries for stream processing components in an adaptive, context-aware way. When the context changes, it derives through reasoning which sensors and observations to filter, given the context and a use case goal, without requiring any more reasoning in real time. This paper presents the details of DIVIDE, and performs evaluations on a healthcare example showing how it can reduce real-time processing times, scale better when there are more sensors and observations, and run efficiently on low-end devices.
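
    A toy sketch of the idea behind such query derivation, heavily simplified and not the actual DIVIDE system: the mapping from a monitoring goal to relevant observation types is hard-coded here, whereas DIVIDE would derive it by reasoning over the domain knowledge and the current context, so that the resulting stream query needs no expressive reasoning at runtime. All names are hypothetical.

        CONTEXT = {"patient": "ex:patient1", "monitoring_goal": "detect_fall"}

        # Stand-in for the knowledge DIVIDE would derive through reasoning.
        RELEVANT_TYPES = {"detect_fall": ["ex:AccelerometerReading", "ex:GyroscopeReading"]}

        def derive_stream_query(context: dict) -> str:
            # Build a filtering continuous query ahead of time, for the current context.
            types = " ".join(RELEVANT_TYPES[context["monitoring_goal"]])
            return (
                "SELECT ?obs WHERE { ?obs a ?type ; ex:about %s . VALUES ?type { %s } }"
                % (context["patient"], types)
            )

        print(derive_stream_query(CONTEXT))

    When the context changes (e.g., a different monitoring goal), the query is re-derived offline rather than re-reasoned over every incoming observation.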

    Using Rules to Generate and Execute Workflows in Smart Factories

    In modern factories, different machines and devices offering their services, such as producing parts or simply providing information, become more and more important. The number and diversity of such devices is increasing, and the task of combining available resources into workflows becomes a challenge which can hardly be handled by a human user. In this paper we describe how we use RESTdesc, a formalism that semantically describes possible actions of RESTful Web APIs via existential rules, to automatically generate and execute such workflows. Our approach makes use of Notation3 reasoners and their ability to produce proofs. These proofs are interpreted as workflow descriptions which can be easily executed and updated. The latter makes our approach very adaptable to unforeseen situations. By using one rule per possible API call, our system is very modular and easy to maintain; services can be readily added or removed. Our implementation shows how the use of rule-based reasoning can significantly improve the daily work in today’s factories.
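
    A toy illustration of composing API descriptions into a workflow by chaining preconditions to postconditions; RESTdesc states these as N3 existential rules and the workflow is read off a reasoner's proof, which this sketch only imitates with plain Python dictionaries and invented API names.

        APIS = [
            {"name": "GET /parts",      "requires": {"order"},         "provides": {"part_list"}},
            {"name": "POST /machining", "requires": {"part_list"},     "provides": {"machined_part"}},
            {"name": "POST /assembly",  "requires": {"machined_part"}, "provides": {"product"}},
        ]

        def plan(initial: set, goal: str) -> list:
            # Forward-chain: apply any API whose preconditions are satisfied
            # and which still adds something new, until the goal is reached.
            known, workflow = set(initial), []
            while goal not in known:
                step = next((a for a in APIS
                             if a["requires"] <= known and not a["provides"] <= known), None)
                if step is None:
                    raise ValueError("goal unreachable from the given state")
                workflow.append(step["name"])
                known |= step["provides"]
            return workflow

        print(plan({"order"}, "product"))
        # ['GET /parts', 'POST /machining', 'POST /assembly']

    Because each API call corresponds to one rule (here, one dictionary entry), adding or removing a service only changes that one description, which is the modularity the paper emphasizes.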

    Correcting Knowledge Base Assertions

    The usefulness and usability of knowledge bases (KBs) are often limited by quality issues. One common issue is the presence of erroneous assertions, often caused by lexical or semantic confusion. We study the problem of correcting such assertions, and present a general correction framework which combines lexical matching, semantic embedding, soft constraint mining and semantic consistency checking. The framework is evaluated using DBpedia and an enterprise medical KB.
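
    A hedged sketch of the general shape of such a correction pipeline, my own toy rather than the paper's framework: replacement candidates for a suspect assertion object are ranked by a lexical score, with the other signals the paper combines left as a comment.

        from difflib import SequenceMatcher

        def lexical_score(a: str, b: str) -> float:
            # Simple string similarity as a stand-in for lexical matching.
            return SequenceMatcher(None, a.lower(), b.lower()).ratio()

        def rank_corrections(wrong_object: str, candidates: list) -> list:
            scored = [(lexical_score(wrong_object, c), c) for c in candidates]
            # The real framework would additionally combine embedding-based
            # similarity, mined soft constraints and a consistency check against
            # the KB before accepting the top-ranked candidate.
            return [c for _, c in sorted(scored, reverse=True)]

        print(rank_corrections("Paris, Texsa", ["Paris_Texas", "Paris", "Texas"]))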